Tute - Completed (Week 9)
Table of Contents
1. Type Inference
1.1. Examples
1.2. Inference Algorithm
2. Existential Types
3. Monadic programming
What is the most general type of the following implicitly-typed MinHs expressions?
```
recfun f x = ((fst x) + 1)
InL (InR True)
recfun f x = if (fst x) then (snd x) else f x
recfun f x = if (fst x) then (InL x) else (InR x)
```
What would their explicitly-typed equivalents be?
- \(\forall a.\ \texttt{Int} \times a \rightarrow \texttt{Int}\)
- \(\forall a.\ \forall b.\ (b + \texttt{Bool}) + a\)
- \(\forall b.\ (\texttt{Bool} \times b) \rightarrow b\)
- \(\forall b.\ (\texttt{Bool} \times b) \rightarrow ((\texttt{Bool} \times b) + (\texttt{Bool} \times b))\)
The explicitly typed versions are:
```
type a. recfun f :: (Int * a -> Int) x = ((fst@Int@a x) + 1)
type a. type b. InL@(b + Bool)@a (InR@b@Bool True)
type b. recfun f :: ((Bool * b) -> b) x = if (fst@Bool@b x) then (snd@Bool@b x) else f x
type b. recfun f :: ((Bool * b) -> ((Bool * b) + (Bool * b))) x = if (fst@Bool@b x) then (InL@(Bool * b)@(Bool * b) x) else (InR@(Bool * b)@(Bool * b) x)
```
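As a sanity check, the analogous Haskell definitions typecheck with exactly these signatures, using pairs for products and `Either` for sums (the names `f1`–`f4` here are illustrative, not part of the tute):

```haskell
-- Haskell analogues of the four MinHs expressions; products become
-- pairs and sums become Either. The names f1..f4 are illustrative.
f1 :: (Int, a) -> Int
f1 x = fst x + 1

e2 :: Either (Either b Bool) a
e2 = Left (Right True)

f3 :: (Bool, b) -> b
f3 x = if fst x then snd x else f3 x

f4 :: (Bool, b) -> Either (Bool, b) (Bool, b)
f4 x = if fst x then Left x else Right x
```

Note that `f3` diverges when the first component is `False`, just like its MinHs counterpart, but that does not affect its type.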
1.2. Inference Algorithm
In the lecture, we introduced two sets of typing rules for an implicitly typed variant of MinHs. Explain the difference between these rules.
The first set of rules doesn't specify an algorithm: several rules require us to guess what types to put in the context, and several others (forall introduction and elimination) can be applied at any time.
The second set of rules fixes that problem by allowing unification variables (also known as flexible type variables) to occur inside types, standing for types that are not yet known. As we type check, whenever we need two types to be equal, we extend the context with constraints on the unification variables so that the types become the same. In this way, by the time type checking completes, we have generated a context that, when applied as a substitution to the types, produces the most general type for a given term.
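To make the constraint-solving step concrete, here is a minimal sketch of first-order unification over MinHs-style types. The data type, names, and list-based substitution are our own simplifications for illustration, not the lecture's exact algorithm:

```haskell
-- A minimal sketch of first-order unification over MinHs-style types.
-- 'UVar' is a unification (flexible) variable standing for a type that
-- is not yet known; the substitution plays the role of the context
-- constraints described above.
data Type = TInt | TBool
          | UVar String
          | Arrow Type Type | Prod Type Type | Sum Type Type
  deriving (Eq, Show)

type Subst = [(String, Type)]

-- Apply a substitution, chasing chains of variable bindings.
apply :: Subst -> Type -> Type
apply s (UVar v)    = maybe (UVar v) (apply s) (lookup v s)
apply s (Arrow a b) = Arrow (apply s a) (apply s b)
apply s (Prod a b)  = Prod  (apply s a) (apply s b)
apply s (Sum a b)   = Sum   (apply s a) (apply s b)
apply _ t           = t

-- Find a substitution making two types equal, if one exists.
unify :: Type -> Type -> Maybe Subst
unify TInt  TInt  = Just []
unify TBool TBool = Just []
unify (UVar v) t  = bindVar v t
unify t (UVar v)  = bindVar v t
unify (Arrow a1 b1) (Arrow a2 b2) = unifyBoth a1 b1 a2 b2
unify (Prod  a1 b1) (Prod  a2 b2) = unifyBoth a1 b1 a2 b2
unify (Sum   a1 b1) (Sum   a2 b2) = unifyBoth a1 b1 a2 b2
unify _ _ = Nothing

-- Unify the first components, then the second under that solution.
unifyBoth :: Type -> Type -> Type -> Type -> Maybe Subst
unifyBoth a1 b1 a2 b2 = do
  s1 <- unify a1 a2
  s2 <- unify (apply s1 b1) (apply s1 b2)
  Just (s2 ++ s1)

-- The occurs check rejects cyclic solutions like a = a -> Int.
bindVar :: String -> Type -> Maybe Subst
bindVar v t
  | t == UVar v = Just []
  | occurs v t  = Nothing
  | otherwise   = Just [(v, t)]

occurs :: String -> Type -> Bool
occurs v (UVar w)    = v == w
occurs v (Arrow a b) = occurs v a || occurs v b
occurs v (Prod a b)  = occurs v a || occurs v b
occurs v (Sum a b)   = occurs v a || occurs v b
occurs _ _           = False
```

For example, unifying `a -> Int` with `Bool -> b` yields a substitution under which both sides become `Bool -> Int`.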
2. Existential Types
Existential types are often over-used in languages that support them. For each of the following existential types, propose a non-existential type that could be used instead:
- \(\exists t.\ t \times (t \rightarrow \texttt{String})\)
- \(\exists t.\ t \times (\texttt{Int} \times t \rightarrow t)\)
- \(\exists t.\ (t \rightarrow t \times t) \times (t \times t \rightarrow t)\)
In order, suitable replacements are:
- \(\texttt{String}\), seeing as consumers of the data type can only produce one string from a value of that type.
- \(\mathbf{1}\), seeing as there is no way to extract any information from that data type.
- \(\mathbf{1}\), seeing as there is no way to extract any information or indeed to construct an instance of the abstract data type.
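The first equivalence can be seen concretely using Haskell's `ExistentialQuantification` extension, which lets us write \(\exists t.\ t \times (t \rightarrow \texttt{String})\) as a data type (the names `Showable`, `observe`, and `toString` below are illustrative):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- An inhabitant of (exists t. t * (t -> String)): a hidden value of
-- some type t, packaged with the only operation a client can apply.
data Showable = forall t. Showable t (t -> String)

-- The only thing a consumer can do is produce the one String, which
-- is why Showable carries no more information than String itself.
observe :: Showable -> String
observe (Showable x render) = render x

-- The other direction of the equivalence: any String can be packaged.
toString :: String -> Showable
toString s = Showable s id
```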
3. Monadic programming
Let's use the list monad! Here's a fun puzzle that you can think about for a while if you want.

```
 1   2   3   4   5
🪦  🪦  🪦  🪦  🪦
```

A mole is disturbing a row of five graves. The mole is always underneath one of the graves.

1. Each day, we may attempt to catch the mole by digging up a grave. If we find the mole, we win; otherwise, we put the grave back in order and go to sleep.
2. Each night, the mole must move from its current position to an adjacent grave.

Find a sequence of digs that is guaranteed to find the mole.
Write a function `move_mole :: Int -> [Int]` such that, if the mole is at grave `n` initially, `move_mole n` is the list of graves the mole might be at after its nightly move.

```haskell
move_mole :: Int -> [Int]
move_mole 1 = [2]
move_mole 5 = [4]
move_mole n = [n - 1, n + 1]
```
Write a function `kill_mole :: Int -> Int -> [Int]`. If we dig at grave `d`, and the mole is at position `m`, then `kill_mole d m` should return the empty list if we found the mole, and `[m]` if the mole is still at large.

```haskell
kill_mole :: Int -> Int -> [Int]
kill_mole d m = if d == m then [] else [m]
```
Let's define the list monad! Write Haskell functions that inhabit the following type signatures, and think about whether they satisfy the monad laws:
```haskell
myReturn :: a -> [a]
myBind   :: [a] -> (a -> [b]) -> [b]
```

```haskell
myReturn x  = [x]
myBind xs f = concat (map f xs)
```
We can check the monad laws algebraically as follows (doing this in the tute might be overkill):
```haskell
return x >>= f
  = concat (map f [x])
  = concat [f x]
  = f x

xs >>= return
  = concat (map return xs)
  = xs

xs >>= (\x -> f x >>= g)
  = concat (map (\x -> f x >>= g) xs)
  = concat (map (\x -> concat (map g (f x))) xs)
  = concat (map (concat . map g . f) xs)
  = concat (map concat (map (map g . f) xs))
  = concat (concat (map (map g . f) xs))
  = concat (concat (map (map g) (map f xs)))
  = concat (map g (concat (map f xs)))
  = concat (map g (xs >>= f))
  = (xs >>= f) >>= g
```
The last derivation here uses the free theorems of `map` and `concat`, eta-equivalence, and the fact that `concat (concat l) = concat (map concat l)`.
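The laws can also be spot-checked on concrete inputs; the sample values and helper functions below are arbitrary choices for illustration:

```haskell
myReturn :: a -> [a]
myReturn x = [x]

myBind :: [a] -> (a -> [b]) -> [b]
myBind xs f = concat (map f xs)

-- Spot-check the three monad laws on sample inputs.
leftId, rightId, assoc :: Bool
leftId  = myBind (myReturn 3) f == f 3
  where f x = [x, x + 1]
rightId = myBind [1, 2, 3] myReturn == [1, 2, 3]
assoc   = myBind (myBind [1, 2] f) g
       == myBind [1, 2] (\x -> myBind (f x) g)
  where f x = [x, x * 10]
        g x = [x + 1]
```

This is no substitute for the algebraic proof above (which covers all inputs), but it is a cheap way to catch a wrong implementation.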
Here's an implementation of a `move` function for following a sequence of digs. If the mole is initially at position `m`, and we perform the sequence of digs `xs`, then `move xs m` is the list of final positions the mole could be at. (The final result may contain duplicates, but we don't care.) Can you use the list monad (either the one you just defined, or the built-in one in Haskell) to simplify it?

```haskell
move :: [Int] -> Int -> [Int]
move [] m = [m]
move (x:xs) m =
  let ys = kill_mole x m
      zs = concat (map move_mole ys)
  in concat (map (move xs) zs)
```
```haskell
move :: [Int] -> Int -> [Int]
move (n:ns) pos = do
  pos'  <- kill_mole n pos
  pos'' <- move_mole pos'
  move ns pos''
move [] pos = return pos
```
Using `do` notation and `move`, write a function `play :: [Int] -> [Int]` that returns the possible final locations of the mole after we've performed a sequence of digs, regardless of the mole's initial position.

```haskell
play :: [Int] -> [Int]
play xs = do
  pos <- [1, 2, 3, 4, 5]
  move xs pos
```